Toward Real Event Detection
News agencies and other news providers or consumers are confronted with the task of extracting events from news articles. This is done either (i) to monitor, and hence be informed about, events of specific kinds over time, or (ii) to react to events immediately. In the past, several promising approaches to extracting events from text have been proposed. Besides purely statistical approaches, there are methods that represent events in a semantically structured form, such as graphs containing actions (predicates), participants (entities), etc. However, it turns out to be very difficult to automatically determine whether an event is real or not. In this paper, we give an overview of approaches that propose solutions for this research problem. We show that there is no gold-standard dataset in which real events are annotated in text documents in a fine-grained, semantically enriched way. We present a methodology for creating such a dataset with the help of crowdsourcing and report preliminary results.
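The semantically structured event form mentioned above (an action plus its participants) could be sketched as a small record type. This is a hypothetical illustration; the field names and the `is_real` flag are assumptions for demonstration, not the paper's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Event:
    """Sketch of a semantically structured event: a predicate (action)
    with its participating entities. Whether the event is real is
    unknown until annotated or verified."""
    predicate: str                                   # the action, e.g. "acquire"
    participants: List[str] = field(default_factory=list)  # involved entities
    is_real: Optional[bool] = None                   # None = not yet determined

e = Event(predicate="acquire", participants=["Company A", "Company B"])
print(e.predicate, e.participants, e.is_real)
```

A gold-standard dataset of the kind the paper calls for would then amount to text spans annotated with such records, each with `is_real` resolved by human judges.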
unarXive: a large scholarly data set with publications’ full-text, annotated in-text citations, and links to metadata
In recent years, scholarly data sets have been used for various purposes, such as paper recommendation, citation recommendation, citation context analysis, and citation context-based document summarization. The evaluation of approaches to such tasks and their applicability in real-world scenarios heavily depend on the used data set. However, existing scholarly data sets are limited in several regards.
Here, we propose a new data set based on all publications from all scientific disciplines available on arXiv.org. Apart from providing the papers' plain text, in-text citations were annotated via global identifiers. Furthermore, citing and cited publications were linked to the Microsoft Academic Graph, providing access to rich metadata. Our data set consists of over one million documents and 29.2 million citation contexts. The data set, which is made freely available for research purposes, not only can enhance the future evaluation of research paper-based and citation context-based approaches but also serve as a basis for new ways to analyze in-text citations.
See https://github.com/IllDepence/unarXive for the source code which has been used for creating the data set.
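A typical use of such a dataset is collecting citation contexts around annotated in-text citation markers. The sketch below assumes a `{{cite:<id>}}` marker syntax purely for illustration; consult the unarXive repository above for the actual annotation format:

```python
import re

# Assumed marker syntax for illustration only; the real unarXive
# format may differ.
CITE_PATTERN = re.compile(r"\{\{cite:([0-9a-f-]+)\}\}")

def citation_contexts(text, window=100):
    """Yield (citation_id, surrounding_context) pairs, where the
    context is a window of characters around each citation marker."""
    for match in CITE_PATTERN.finditer(text):
        start = max(0, match.start() - window)
        end = min(len(text), match.end() + window)
        yield match.group(1), text[start:end]

sample = "Prior work {{cite:0a1b2c3d}} reported similar results."
for cid, ctx in citation_contexts(sample):
    print(cid)
```

Because the citation identifiers are global and linked to the Microsoft Academic Graph, each extracted context can be joined with rich metadata about the cited publication.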
To cite our data set, or for further information, please refer to our journal article:
Tarek Saier, Michael Färber: "unarXive: A Large Scholarly Data Set with Publications’ Full-Text, Annotated In-Text Citations, and Links to Metadata", Scientometrics, 2020, http://dx.doi.org/10.1007/s11192-020-03382-z
Analyzing the Impact of Companies on AI Research Based on Publications
Artificial Intelligence (AI) is one of the most momentous technologies of our time. Thus, it is of major importance to know which stakeholders influence AI research. Besides researchers at universities and colleges, researchers in companies have hardly been considered in this context. In this article, we consider how the influence of companies on AI research can be made measurable on the basis of scientific publishing activities. We compare academic- and company-authored AI publications published in the last decade and use scientometric data from multiple scholarly databases to look for differences across these groups and to disclose the top contributing organizations. While the vast majority of publications are still produced by academia, we find that the citation count an individual publication receives is significantly higher when it is (co-)authored by a company. Furthermore, using a variety of altmetric indicators, we notice that publications with company participation receive considerably more attention online. Finally, we place our analysis results in a broader context and present targeted recommendations to safeguard a harmonious balance between academia and industry in the realm of AI research.
Comment: Published in Scientometrics
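The group comparison described above can be sketched in miniature: given per-publication citation counts for academic- and company-(co-)authored papers, summarize each group and compare. The counts below are made-up placeholder values for demonstration, not data from the study:

```python
from statistics import mean, median

# Placeholder citation counts, invented for illustration only.
academic_citations = [3, 0, 5, 2, 8, 1, 4]
company_citations = [12, 7, 30, 9, 15]

def summarize(counts):
    """Basic per-group descriptive statistics."""
    return {"n": len(counts), "mean": mean(counts), "median": median(counts)}

for label, counts in [("academic", academic_citations),
                      ("company", company_citations)]:
    print(label, summarize(counts))
```

In practice such a comparison would also apply a significance test and control for field and publication year, as citation behavior varies strongly across disciplines.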